feat(stack): reclaim leaked dev k3d networks on obol stack purge #393
Closed
OisinKyne wants to merge 1 commit into oisin/377-3
Conversation
Collaborator
Each `k3d cluster create` reserves a /16 from Docker's predefined `172.16.0.0/12` pool (~16 networks max). In OBOL_DEVELOPMENT mode the pull-through registry-mirror containers (`k3d-obol-{docker,ghcr,quay}-io.localhost`) attach to every cluster network and are NOT disconnected by `k3d cluster delete`, so the network leaks. After ~16 dev cycles the pool is exhausted and every `obol stack up` fails with "all predefined address pools have been fully subnetted".
`flows/lib.sh` has had `cleanup_k3d_obol_networks` for this since c08c873/34e62e5; lift the same logic into Go and run it from `Purge`.
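A minimal sketch of one piece of that lifted logic: identifying the registry-mirror containers that must be disconnected before a leaked network can be removed. The container names come from this PR's description, but the helper name, regex, and structure are assumptions, not the merged code.

```go
package main

import (
	"fmt"
	"regexp"
)

// mirrorRe matches the OBOL_DEVELOPMENT pull-through registry-mirror
// containers (k3d-obol-{docker,ghcr,quay}-io.localhost) that attach to
// every cluster network. Names are from the PR description; the exact
// pattern used in the Go code is assumed here.
var mirrorRe = regexp.MustCompile(`^k3d-obol-(docker|ghcr|quay)-io\.localhost$`)

// isMirrorContainer reports whether a container attached to a leaked
// k3d network is one of the registry mirrors that should be
// disconnected (not removed) so the network itself can be reclaimed.
func isMirrorContainer(name string) bool {
	return mirrorRe.MatchString(name)
}

func main() {
	for _, n := range []string{
		"k3d-obol-docker-io.localhost", // mirror: disconnect, then remove network
		"k3d-demo-serverlb",            // cluster node: not a mirror
	} {
		fmt.Printf("%s mirror=%v\n", n, isMirrorContainer(n))
	}
}
```

Disconnecting the mirrors rather than deleting them is what keeps the warm registry cache intact across dev cycles.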
`obol stack down` is intentionally not touched: it calls `k3d cluster stop`, which preserves the network for `Up` to resume. The rare graceful-stop-fallback-to-delete path inside `Down` is covered the next time `Purge` runs.
Live clusters (any `*-serverlb` or `*-server-N` attachment) are skipped, so this is safe alongside other running stacks. Mirror containers auto-rejoin the next cluster's network on the next `obol stack up`, so disconnecting them is non-destructive for the cache.
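The liveness check above could look something like the following Go sketch. The `*-serverlb` / `*-server-N` naming is stated in the PR description; the function name and regex are hypothetical.

```go
package main

import (
	"fmt"
	"regexp"
)

// liveNodeRe matches k3d load-balancer (*-serverlb) and server node
// (*-server-N) container names. A network with any such attachment
// belongs to a running cluster and must not be reclaimed. The exact
// pattern in the merged code may differ.
var liveNodeRe = regexp.MustCompile(`-(serverlb|server-\d+)$`)

// networkIsLive reports whether any container attached to a network
// indicates a live cluster, making the network unsafe to remove.
func networkIsLive(attached []string) bool {
	for _, name := range attached {
		if liveNodeRe.MatchString(name) {
			return true
		}
	}
	return false
}

func main() {
	// Only a mirror attached: the network is a leak and can be reclaimed.
	fmt.Println(networkIsLive([]string{"k3d-obol-docker-io.localhost"}))
	// Server node and load balancer attached: skip, cluster is running.
	fmt.Println(networkIsLive([]string{"k3d-demo-serverlb", "k3d-demo-server-0"}))
}
```

Gating removal on attachments rather than on cluster names means the purge stays safe even when other stacks share the same Docker daemon.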
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>